2,113 research outputs found

    Hybrid Mechanisms for On-Demand Transport

    Speedups for Multi-Criteria Urban Bicycle Routing

    Personalized fully multimodal journey planner

    We present an advanced journey planner designed to help travellers take full advantage of the increasingly rich, and consequently more complex, offering of mobility services available in modern cities. In contrast to existing systems, our journey planner can plan with the full spectrum of mobility services, combining individual and collective, fixed-schedule as well as on-demand modes of transport, while taking into account individual user preferences and the availability of transport services. Furthermore, the planner personalizes journey planning for each user by employing a recommendation engine that builds a contextual model of the user from observations of the user's past travel choices. The planner has been deployed in four large European cities and positively evaluated by hundreds of users in field trials.
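    To make the interplay of multimodal routing and learned user preferences concrete, here is a minimal, illustrative sketch, not the paper's implementation: a Dijkstra search over a small multimodal graph in which a per-user weight, standing in for the learned contextual model, scales the perceived cost of each transport mode. All node names, modes, and weights below are invented for illustration.

    ```python
    import heapq

    # graph[node] -> list of (neighbor, travel_minutes, mode).
    # A toy multimodal network; real planners use city-scale graphs.
    graph = {
        "home":   [("stop_a", 8, "walk"), ("hub", 12, "bike_share")],
        "stop_a": [("hub", 15, "bus")],
        "hub":    [("office", 10, "metro"), ("office", 6, "ride_hail")],
        "office": [],
    }

    # Hypothetical mode-preference multipliers; in the described system these
    # would come from the contextual user model learned from past choices.
    user_prefs = {"walk": 1.0, "bike_share": 0.8, "bus": 1.2,
                  "metro": 0.9, "ride_hail": 1.5}

    def plan(graph, prefs, source, target):
        """Dijkstra on perceived cost = travel time * user's mode weight."""
        frontier = [(0.0, source, [source])]
        best = {source: 0.0}
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == target:
                return cost, path
            for nxt, minutes, mode in graph[node]:
                new_cost = cost + minutes * prefs.get(mode, 1.0)
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
        return float("inf"), []

    cost, route = plan(graph, user_prefs, "home", "office")
    print(f"perceived cost {cost:.1f}: {' -> '.join(route)}")
    ```

    With these invented weights the planner prefers the bike-share-then-metro route; raising the metro multiplier would tip the same query toward a different mode mix, which is the kind of per-user adaptation the abstract describes.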

    Controllable Image Generation via Collage Representations

    Recent advances in conditional generative image models have enabled impressive results. On the one hand, text-based conditional models have achieved remarkable generation quality by leveraging large-scale datasets of image-text pairs. To enable fine-grained controllability, however, text-based models require long prompts, whose details may be ignored by the model. On the other hand, layout-based conditional models have also witnessed significant advances. These models rely on bounding boxes or segmentation maps for precise spatial conditioning, in combination with coarse semantic labels. The semantic labels, however, cannot express detailed appearance characteristics. In this paper, we approach fine-grained scene controllability through image collages, which allow a rich visual description of the desired scene as well as of the appearance and location of the objects therein, without the need for class or attribute labels. We introduce "mixing and matching scenes" (M&Ms), an approach consisting of an adversarially trained generative image model that is conditioned on the appearance features and spatial positions of the different elements in a collage and integrates them into a coherent image. We train our model on the OpenImages (OI) dataset and evaluate it on collages derived from the OI and MS-COCO datasets. Our experiments on the OI dataset show that M&Ms outperforms baselines in terms of fine-grained scene controllability while being highly competitive in terms of image quality and sample diversity. On the MS-COCO dataset, we highlight the generalization ability of our model by outperforming DALL-E in terms of the zero-shot FID metric, despite using two orders of magnitude fewer parameters and less data. Collage-based generative models have the potential to advance content creation efficiently and effectively, as they are intuitive to use and yield high-quality generations.
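    As a rough illustration of how a collage can be turned into generator conditioning in the spirit of M&Ms, the sketch below pairs an appearance embedding for each collage element with its normalized spatial position and stacks them into a single conditioning array. This is not the authors' code: the encoder is a stand-in stub and the feature dimension is an assumption.

    ```python
    import numpy as np

    FEAT_DIM = 16  # assumed embedding size; the real model's differs

    def encode_appearance(crop: np.ndarray) -> np.ndarray:
        """Stand-in for a learned appearance encoder (e.g. a CNN embedding).
        Deterministic pseudo-features keyed on the crop's pixel content."""
        rng = np.random.default_rng(abs(hash(crop.tobytes())) % 2**32)
        return rng.standard_normal(FEAT_DIM)

    def collage_conditioning(elements):
        """elements: list of (crop, (x, y, w, h)) with normalized [0, 1] coords.
        Returns an array of shape (num_elements, FEAT_DIM + 4)."""
        rows = []
        for crop, (x, y, w, h) in elements:
            appearance = encode_appearance(crop)      # what the element looks like
            position = np.array([x, y, w, h])          # where it goes in the scene
            rows.append(np.concatenate([appearance, position]))
        return np.stack(rows)

    # Two hypothetical collage elements: a sky crop across the top half,
    # a dog crop placed bottom-left.
    sky = np.zeros((32, 32, 3))
    dog = np.ones((24, 24, 3))
    cond = collage_conditioning([(sky, (0.0, 0.0, 1.0, 0.5)),
                                 (dog, (0.1, 0.6, 0.3, 0.35))])
    print(cond.shape)  # (2, 20) -> fed to the adversarially trained generator
    ```

    The point of the representation is that each row carries both appearance and placement, so no class or attribute labels are needed, which matches the controllability claim in the abstract.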